Day24 Giving Your k8s Pods Multiple Network Interfaces - Hands-On

Yesterday we gave an overview of the use cases and architecture of Multus CNI. Today we walk through building and testing a Multus environment, using Open vSwitch (OVS) to provide the additional interface.

ps. This is a no-frills, purely hands-on article.

Multus

The full deployment manifests and detailed documentation are available on its GitHub repository.

ps. If your Kubernetes version is older than 1.16, deploy the pre-1.16.yml manifests from the GitHub repo instead. This walkthrough uses Kubernetes v1.18.15.
  • Deploy Multus
multus-daemonset.yml

---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: network-attachment-definitions.k8s.cni.cncf.io
spec:
  group: k8s.cni.cncf.io
  scope: Namespaced
  names:
    plural: network-attachment-definitions
    singular: network-attachment-definition
    kind: NetworkAttachmentDefinition
    shortNames:
    - net-attach-def
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          description: 'NetworkAttachmentDefinition is a CRD schema specified by the Network Plumbing
            Working Group to express the intent for attaching pods to one or more logical or physical
            networks. More information available at: https://github.com/k8snetworkplumbingwg/multi-net-spec'
          type: object
          properties:
            apiVersion:
              description: 'APIVersion defines the versioned schema of this representation
                of an object. Servers should convert recognized schemas to the
                latest internal value, and may reject unrecognized values. More info:
                https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
              type: string
            kind:
              description: 'Kind is a string value representing the REST resource this
                object represents. Servers may infer this from the endpoint the client
                submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
              type: string
            metadata:
              type: object
            spec:
              description: 'NetworkAttachmentDefinition spec defines the desired state of a network attachment'
              type: object
              properties:
                config:
                  description: 'NetworkAttachmentDefinition config is a JSON-formatted CNI configuration'
                  type: string
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: multus
rules:
  - apiGroups: ["k8s.cni.cncf.io"]
    resources:
      - '*'
    verbs:
      - '*'
  - apiGroups:
      - ""
    resources:
      - pods
      - pods/status
    verbs:
      - get
      - update
  - apiGroups:
      - ""
      - events.k8s.io
    resources:
      - events
    verbs:
      - create
      - patch
      - update
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: multus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: multus
subjects:
- kind: ServiceAccount
  name: multus
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: multus
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: multus-cni-config
  namespace: kube-system
  labels:
    tier: node
    app: multus
data:
  # NOTE: If you'd prefer to manually apply a configuration file, you may create one here.
  # In the case you'd like to customize the Multus installation, you should change the arguments to the Multus pod
  # change the "args" line below from
  # - "--multus-conf-file=auto"
  # to:
  # "--multus-conf-file=/tmp/multus-conf/70-multus.conf"
  # Additionally -- you should ensure that the name "70-multus.conf" is the alphabetically first name in the
  # /etc/cni/net.d/ directory on each node, otherwise, it will not be used by the Kubelet.
  cni-conf.json: |
    {
      "name": "multus-cni-network",
      "type": "multus",
      "capabilities": {
        "portMappings": true
      },
      "delegates": [
        {
          "cniVersion": "0.3.1",
          "name": "default-cni-network",
          "plugins": [
            {
              "type": "flannel",
              "name": "flannel.1",
              "delegate": {
                "isDefaultGateway": true,
                "hairpinMode": true
              }
            },
            {
              "type": "portmap",
              "capabilities": {
                "portMappings": true
              }
            }
          ]
        }
      ],
      "kubeconfig": "/etc/cni/net.d/multus.d/multus.kubeconfig"
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-multus-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: multus
    name: multus
spec:
  selector:
    matchLabels:
      name: multus
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        tier: node
        app: multus
        name: multus
    spec:
      hostNetwork: true
      nodeSelector:
        kubernetes.io/arch: amd64
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: multus
      containers:
      - name: kube-multus
        image: docker.io/nfvpe/multus:stable
        command: ["/entrypoint.sh"]
        args:
        - "--multus-conf-file=auto"
        - "--cni-version=0.3.1"
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: true
        volumeMounts:
        - name: cni
          mountPath: /host/etc/cni/net.d
        - name: cnibin
          mountPath: /host/opt/cni/bin
        - name: multus-cfg
          mountPath: /tmp/multus-conf
      terminationGracePeriodSeconds: 10
      volumes:
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: cnibin
          hostPath:
            path: /opt/cni/bin
        - name: multus-cfg
          configMap:
            name: multus-cni-config
            items:
            - key: cni-conf.json
              path: 70-multus.conf
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-multus-ds-ppc64le
  namespace: kube-system
  labels:
    tier: node
    app: multus
    name: multus
spec:
  selector:
    matchLabels:
      name: multus
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        tier: node
        app: multus
        name: multus
    spec:
      hostNetwork: true
      nodeSelector:
        kubernetes.io/arch: ppc64le
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: multus
      containers:
      - name: kube-multus
        # ppc64le support requires multus:latest for now. support 3.3 or later.
        image: docker.io/nfvpe/multus:stable-ppc64le
        command: ["/entrypoint.sh"]
        args:
        - "--multus-conf-file=auto"
        - "--cni-version=0.3.1"
        resources:
          requests:
            cpu: "100m"
            memory: "90Mi"
          limits:
            cpu: "100m"
            memory: "90Mi"
        securityContext:
          privileged: true
        volumeMounts:
        - name: cni
          mountPath: /host/etc/cni/net.d
        - name: cnibin
          mountPath: /host/opt/cni/bin
        - name: multus-cfg
          mountPath: /tmp/multus-conf
      terminationGracePeriodSeconds: 10
      volumes:
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: cnibin
          hostPath:
            path: /opt/cni/bin
        - name: multus-cfg
          configMap:
            name: multus-cni-config
            items:
            - key: cni-conf.json
              path: 70-multus.conf
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-multus-ds-arm64v8
  namespace: kube-system
  labels:
    tier: node
    app: multus
    name: multus
spec:
  selector:
    matchLabels:
      name: multus
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        tier: node
        app: multus
        name: multus
    spec:
      hostNetwork: true
      nodeSelector:
        kubernetes.io/arch: arm64
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: multus
      containers:
      - name: kube-multus
        image: docker.io/nfvpe/multus:stable-arm64v8
        command: ["/entrypoint.sh"]
        args:
        - "--multus-conf-file=auto"
        - "--cni-version=0.3.1"
        resources:
          requests:
            cpu: "100m"
            memory: "90Mi"
          limits:
            cpu: "100m"
            memory: "90Mi"
        securityContext:
          privileged: true
        volumeMounts:
        - name: cni
          mountPath: /host/etc/cni/net.d
        - name: cnibin
          mountPath: /host/opt/cni/bin
        - name: multus-cfg
          mountPath: /tmp/multus-conf
      terminationGracePeriodSeconds: 10
      volumes:
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: cnibin
          hostPath:
            path: /opt/cni/bin
        - name: multus-cfg
          configMap:
            name: multus-cni-config
            items:
            - key: cni-conf.json
              path: 70-multus.conf
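
Apply the manifest above and confirm that a Multus pod comes up on every node (a minimal sketch, assuming the file is saved as multus-daemonset.yml):

kubectl apply -f multus-daemonset.yml
kubectl get daemonset -n kube-system kube-multus-ds-amd64
kubectl get pods -n kube-system -l app=multus -o wide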

Setting up the OVS environment

  • First, install the OVS package:
sudo apt update && sudo apt install openvswitch-switch -y
  • Add a bridge:
#ovs-vsctl add-br <Bridge Name>
ovs-vsctl add-br br0

ps. To delete a bridge, use ovs-vsctl del-br <Bridge Name>.
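
A quick sanity check that the bridge exists before moving on (a minimal sketch):

# List the bridges known to this node's OVS instance; br0 should appear.
sudo ovs-vsctl list-br
# Show the full OVS configuration, including any ports attached to br0.
sudo ovs-vsctl show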

  • Create the OVS CNI manifest:
    ovs-cni.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ovs-cni-amd64
  namespace: kube-system
  labels:
    tier: node
    app: ovs-cni
spec:
  selector:
    matchLabels:
      app: ovs-cni
  template:
    metadata:
      labels:
        tier: node
        app: ovs-cni
      annotations:
        description: OVS CNI allows users to attach their Pods/VMs to Open vSwitch bridges available on nodes
    spec:
      serviceAccountName: ovs-cni-marker
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: amd64
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      initContainers:
      - name: ovs-cni-plugin
        image: quay.io/kubevirt/ovs-cni-plugin:latest
        command: ['cp', '/ovs', '/host/opt/cni/bin/ovs']
        imagePullPolicy: IfNotPresent
        securityContext:
          privileged: true
        volumeMounts:
        - name: cnibin
          mountPath: /host/opt/cni/bin
      containers:
      - name: ovs-cni-marker
        image: quay.io/kubevirt/ovs-cni-marker:latest
        imagePullPolicy: IfNotPresent
        securityContext:
          privileged: true
        args:
          - -node-name
          - $(NODE_NAME)
          - -ovs-socket
          - /host/var/run/openvswitch/db.sock
        volumeMounts:
          - name: ovs-var-run
            mountPath: /host/var/run/openvswitch
        env:
          - name: NODE_NAME
            valueFrom:
              fieldRef:
                fieldPath: spec.nodeName
      volumes:
        - name: cnibin
          hostPath:
            path: /opt/cni/bin
        - name: ovs-var-run
          hostPath:
            path: /var/run/openvswitch
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: ovs-cni-marker-cr
rules:
- apiGroups:
  - ""
  resources:
  - nodes
  - nodes/status
  verbs:
  - get
  - update
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: ovs-cni-marker-crb
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ovs-cni-marker-cr
subjects:
- kind: ServiceAccount
  name: ovs-cni-marker
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ovs-cni-marker
  namespace: kube-system
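
Apply the manifest and confirm that the ovs-cni plugin and marker pods are running on each node (a minimal sketch, assuming the file is saved as ovs-cni.yaml):

kubectl apply -f ovs-cni.yaml
kubectl get pods -n kube-system -l app=ovs-cni -o wide
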
  • Create the OVS NetworkAttachmentDefinition

ovs-net-crd.yaml

cat <<EOF >./ovs-net-crd.yaml
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: ovs-net
  annotations:
    k8s.v1.cni.cncf.io/resourceName: ovs-cni.network.kubevirt.io/<Bridge Name>
spec:
  config: '{
      "cniVersion": "0.3.1",
      "type": "ovs",
      "bridge": "<Bridge Name>"
    }'
EOF
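
Replace <Bridge Name> with the bridge created earlier (br0 in this walkthrough), then apply and inspect the NetworkAttachmentDefinition (a minimal sketch):

kubectl apply -f ovs-net-crd.yaml
kubectl get network-attachment-definitions
kubectl get net-attach-def ovs-net -o yaml
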
  • Finally, create the CNI network controller

unix-daemonset.yaml

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: network-controller-server-unix
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: network-controller-server-unix
  template:
    metadata:
      labels:
        name: network-controller-server-unix
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: network-controller-server-unix
        image: sdnvortex/network-controller:v0.4.9
        securityContext:
          privileged: true
        command: ["/go/bin/server"]
        args: ["-unix=/tmp/vortex.sock", "-netlink-gc"]
        volumeMounts:
        - mountPath: /var/run/docker/netns
          name: docker-ns
          #mountPropagation: Bidirectional
        - mountPath: /var/run/docker.sock
          name: docker-sock
        - mountPath: /var/run/openvswitch/db.sock
          name: ovs-sock
        - mountPath: /tmp/
          name: grpc-sock
      volumes:
      - name: docker-ns
        hostPath:
          path: /run/docker/netns
      - name: docker-sock
        hostPath:
          path: /run/docker.sock
      - name: ovs-sock # the UNIX version only performs add-port, so db.sock is enough
        hostPath:
          path: /run/openvswitch/db.sock
      - name: grpc-sock
        hostPath:
          path: /tmp/vortex
      hostNetwork: true
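
Apply the DaemonSet and make sure a controller pod is running on every node (a minimal sketch, assuming the file is saved as unix-daemonset.yaml):

kubectl apply -f unix-daemonset.yaml
kubectl get pods -n kube-system -l name=network-controller-server-unix -o wide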

Testing

  • Finally, run a test to confirm that the extra network interface is attached.
    test.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ubuntu-deployment
spec:
  selector:
    matchLabels:
      app: ubuntu
  replicas: 1
  template:
    metadata:
      labels:
        app: ubuntu
    spec:
      containers:
      - name: ubuntu-container
        image: ubuntu:20.04
        args: [bash, -c, 'for ((i = 0; ; i++)); do echo "$i: $(date)"; sleep 100; done']
      initContainers:
      - name: init-network-client
        image: sdnvortex/network-controller:v0.4.9
        command: ["/go/bin/client"]
        args: ["-s=unix:///tmp/vortex.sock", "-b=br0", "-n=eth1", "-i=192.168.2.160/23"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_UUID
          valueFrom:
            fieldRef:
              fieldPath: metadata.uid
        volumeMounts:
        - mountPath: /tmp/
          name: grpc-sock
      volumes:
      - name: grpc-sock
        hostPath:
          path: /tmp/vortex/
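
Deploy the test workload and wait for the init container to finish attaching the extra interface (a minimal sketch, assuming the file is saved as test.yaml):

kubectl apply -f test.yaml
kubectl get pods -l app=ubuntu -w
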
  • Result:
    Pod status

    Network interfaces inside the Pod
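
The same check can be done from the command line (a minimal sketch; the ubuntu:20.04 base image may not include iproute2, so the interface list is read from /sys and /proc instead of ip addr):

# Confirm the pod is Running and note which node it landed on.
kubectl get pods -l app=ubuntu -o wide

# Look up the pod name and list its interfaces; both eth0 (cluster network)
# and eth1 (the OVS-attached interface) should be present.
POD=$(kubectl get pods -l app=ubuntu -o jsonpath='{.items[0].metadata.name}')
kubectl exec "$POD" -- ls /sys/class/net
kubectl exec "$POD" -- cat /proc/net/dev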

Conclusion

Once the second interface is created, the Pod can communicate with other Pods over it. Since this interface goes through OVS, any traffic that travels across the OVS bridge can get through, regardless of how the namespaces are partitioned.
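
To exercise that claim, one could deploy a second copy of the test Deployment with a different -i address and ping it over the OVS-attached interface. A rough sketch, assuming a second pod was given the hypothetical address 192.168.2.161/23 and that ping is available in the image (it can be installed with apt-get install iputils-ping):

# 192.168.2.161 is a hypothetical eth1 address assigned to a second test pod.
POD=$(kubectl get pods -l app=ubuntu -o jsonpath='{.items[0].metadata.name}')
kubectl exec "$POD" -- ping -c 3 -I eth1 192.168.2.161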

